21 research outputs found

    FPGA Realizations of Walsh Transforms for Different Transform and Word lengths into Xilinx and Altera Chips

    This paper presents FPGA realizations of Walsh transforms. The realizations target a system for arbitrary waveform generation, addition/subtraction, multiplication, and processing of several signals based on Walsh transforms, which are defined in terms of products of Rademacher functions. Input signals pass through the system serially, and the outputs, whether signals or coefficients, are likewise produced serially. To minimize area utilization when the systems are realized on FPGA chips, the word length of every processing step has been designed carefully. On this basis, FPGA realizations of these various applications on Xilinx and Altera chips have been carried out. For the Xilinx realizations, Xilinx ISE was used to display the results and to extract critical parameters such as speed and static power, while the realizations on Altera chips were conducted using Quartus. Comparisons of speed and power between the Xilinx and Altera realizations are presented, even though this is not an apples-to-apples comparison. Finally, it can be concluded that Walsh transforms can be realized not only for the applications demonstrated here but can potentially be used for other applications as well.
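    As background to the transform being realized in hardware, a minimal software sketch of the fast Walsh-Hadamard transform is shown below. The function name, the natural (Hadamard) ordering, and the in-place butterfly structure are illustrative assumptions, not the paper's actual implementation:

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalised, natural order).

    The butterfly stages mirror the add/subtract structure that a serial
    hardware realization would pipeline. len(x) must be a power of two;
    applying fwht twice returns len(x) times the original signal.
    """
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

    Because the forward pass is unnormalised, each stage grows the word length by one bit, which is why the abstract stresses choosing the word length of every processing step carefully.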

    Design of Real Time Walsh Transform for Processing of Multiple Digital Signals

    This paper presents the design and implementation of the processing of multiple digital signals using real-time Walsh transforms. The real-time Walsh transform is designed so that it starts producing outputs immediately, even before all input data have entered the system. The system consists of a Walsh transform circuit, several Digital Signal Processing (DSP) circuits, and an inverse Walsh transform circuit. The real-time Walsh and inverse Walsh transforms are also designed to produce correct results for any possible combination of input data. The DSP blocks are able to perform addition, subtraction, and dyadic convolution of the Walsh coefficients of more than one digital signal. Comparisons to previous methods are briefly presented. It was found that the real-time Walsh transform structure has better performance than many of the previously reported results in the literature. DOI: http://dx.doi.org/10.11591/ijece.v3i2.196

    FPGA Hardware Realization: Addition of Two Digital Signals Based on Walsh Transforms

    This paper presents the hardware realization of the addition of two digital signals based on Walsh and inverse Walsh transforms, targeted at the Xilinx FPGA Spartan 3 board. The realization uses the Walsh transform to convert the input data to the frequency domain and the inverse Walsh transform to convert the data back from the frequency domain. The designed system is capable of performing addition, subtraction, multiplication, and Arbitrary Waveform Generation (AWG); however, in the present work, only the hardware realization of addition has been demonstrated. The clock frequency for the realization on the board is supplied by an external function generator. Output results are captured using a logic analyzer, and input data are entered manually through the board's slide switches.
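    The transform/process/inverse-transform pipeline described above relies on the linearity of the Walsh transform: adding two signals' coefficients and inverse-transforming recovers the sum of the signals. A minimal software sketch of that round trip (names and normalisation are assumptions for illustration):

```python
def fwht(x):
    # Unnormalised fast Walsh-Hadamard transform; fwht(fwht(x)) == n * x.
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def add_via_walsh(a, b):
    # Transform both inputs, add coefficient-wise, then inverse-transform
    # (dividing by n undoes the unnormalised forward pass).
    n = len(a)
    s = [u + v for u, v in zip(fwht(a), fwht(b))]
    return [v // n for v in fwht(s)]
```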

    Efficient blockchain-based group key distribution for secure authentication in VANETs

    This paper proposes a group key distribution scheme using smart contract-based blockchain technology. The smart contract's functions allow the group session key to be distributed securely, following initial legitimacy detection using public key infrastructure-based authentication. For message authentication, we propose a lightweight symmetric key cryptography-based group signature method that supports the security and privacy requirements of vehicular ad hoc networks (VANETs). We examine the scheme's robustness against typical adversarial attacks. To evaluate the gas costs associated with the smart contract's functions, we implemented it on the Ethereum main network. Finally, comprehensive analyses of computation and communication costs demonstrate the scheme's effectiveness.
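    The abstract does not specify the symmetric primitive, so as a hedged illustration only, a lightweight symmetric-key message authentication step of the kind described could be sketched with an HMAC under the shared group session key (a stand-in, not the paper's actual group signature construction):

```python
import hmac
import hashlib

def sign_message(group_key: bytes, message: bytes) -> bytes:
    # Hypothetical stand-in: HMAC-SHA256 tag under the distributed
    # group session key, far cheaper than per-message PKC signatures.
    return hmac.new(group_key, message, hashlib.sha256).digest()

def verify_message(group_key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_message(group_key, message), tag)
```

    Any group member holding the session key can both generate and verify tags, which is what makes this symmetric approach lightweight relative to signature-based PKC authentication.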

    Blockchain-based secret key extraction for efficient and secure authentication in VANETs

    Intelligent transportation systems are an emerging technology that facilitates real-time vehicle-to-everything communication. Hence, securing and authenticating data packets for intra- and inter-vehicle communication are fundamental security services in vehicular ad-hoc networks (VANETs). However, public-key cryptography (PKC) is commonly used in signature-based authentication, which consumes significant computation resources and communication bandwidth for signature generation and verification, and for key distribution. Therefore, physical layer-based secret key extraction has emerged as an effective candidate for key agreement, exploiting the randomness and reciprocity of wireless channels. However, imperfect channel reciprocity generates discrepancies in the extracted key, and existing reconciliation algorithms suffer from significant communication costs and security issues. In this paper, PKC-based authentication is used for initial legitimacy detection and for exchanging authenticated probing packets. Accordingly, we propose a blockchain-based reconciliation technique that allows the trusted third party (TTP) to publish the correction sequence of the mismatched bits through a transaction using a smart contract. The smart contract functions enable the TTP to map the transaction address to vehicle-related information and allow vehicles to obtain the transaction contents securely. The obtained shared key is then used for symmetric key cryptography (SKC)-based authentication for subsequent transmissions, saving significant computation and communication costs. The correctness and security robustness of the scheme are proved using Burrows–Abadi–Needham (BAN) logic and the Automated Validation of Internet Security Protocols and Applications (AVISPA) simulator. We also discuss the scheme's resistance to typical attacks. The scheme's performance in terms of packet delay and loss ratio is evaluated using the network simulator OMNeT++. Finally, the computation analysis shows that the scheme saves ~99% of the time required to verify 1000 messages compared to existing PKC-based schemes.
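    The core of the reconciliation step can be illustrated abstractly: once the mismatch positions are known, a published correction sequence is just an XOR mask that the receiving vehicle applies to its extracted key. This simplified sketch assumes the TTP can compute the mask from both extracted keys; the on-chain publication and address-mapping details from the paper are omitted:

```python
def correction_sequence(key_tx, key_rx):
    # XOR mask of the two extracted bit strings: a 1 marks a mismatched
    # position. This mask reveals positions only, not the key bits.
    return [a ^ b for a, b in zip(key_tx, key_rx)]

def reconcile(key_rx, mask):
    # Flip every bit flagged by the published correction sequence so that
    # both sides end up holding the same shared key.
    return [k ^ m for k, m in zip(key_rx, mask)]
```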

    Hardware-Based Hopfield Neuromorphic Computing for Fall Detection

    With the growing popularity of smart wearable systems, sensor signal processing poses increasing challenges for machine learning in embedded scenarios. Traditional machine-learning methods for data classification, especially in real time, are computationally intensive, so deploying artificial intelligence algorithms on embedded hardware for fast data classification and accurate fall detection is a major challenge for power-efficient embedded systems. Therefore, by exploiting the associative memory of the Hopfield Neural Network, a hardware module has been designed that implements the neural network algorithm, integrating sensor data and classifying it to recognize falls. Weights for human activity features are obtained by Hebbian learning and embedded into the hardware design: the weight for the fall activity is derived through data preprocessing and then mapped to amplification-factor settings in the hardware. The design is checked with validation scenarios, and the experiment is completed with a Hopfield neural network in the analog module. In simulations, the classification accuracy on the fall data reached 88.9%, which compares well with results achieved by software-based machine-learning algorithms and verifies the feasibility of the hardware design. The designed system performs the complex calculation of the feedback signal in hardware, replacing the software-based method, and a straightforward circuit design realizes the Hopfield weight settings, maximizing the reusability and flexibility of the circuit.
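    The Hebbian learning rule and the associative recall it enables can be sketched in a few lines. The tiny bipolar pattern below is illustrative only; the paper's actual feature vectors and analog amplification mapping are not reproduced here:

```python
def hebbian_weights(patterns):
    # Hebbian rule: W[i][j] = sum over stored patterns of p[i] * p[j],
    # with a zero diagonal (no self-connections). Patterns are +/-1 vectors.
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    # Synchronous sign updates: the network's feedback drives a corrupted
    # input toward the nearest stored pattern (associative memory).
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state
```

    In the hardware version described above, the stored weights become fixed amplification factors, so this recall loop is what the analog feedback circuit computes continuously.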

    Next-generation IoT: harnessing AI for enhanced localization and energy harvesting in backscatter communications

    Ongoing research on backscatter communications and localization has achieved highly accurate results in controlled environments; the main difficulties arise in complex RF environments. This paper investigates concurrent localization and ambient radio frequency (RF) energy harvesting using backscatter communication systems for Internet of Things networks. Dynamic real-world environments introduce complexity from multipath reflection and shadowing, as well as interference from movement. A machine learning framework leveraging K-Nearest Neighbors and Random Forest classifiers provides robustness against such variability: historical received-signal measurements are used to construct a location fingerprint database that is resilient to perturbations. The Random Forest model demonstrates precise localization across customized benches with programmable shuffling of chairs outfitted with RF identification tags. Average precision exceeds 99% despite deliberate placement modifications that induce signal fluctuations emulating mobility and clutter. Significantly, directional antennas can harvest over −3 dBm, while even omnidirectional antennas provide −10 dBm, both suitable for perpetually replenishing low-energy electronics. Consequently, the intelligent backscatter platform localizes unmodified objects to customizable precision while promoting self-sustainability.
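    The fingerprint-matching idea behind the K-Nearest Neighbors classifier can be sketched without any ML library: store labelled received-signal-strength (RSS) vectors, then classify a new measurement by majority vote among its nearest stored fingerprints. The RSS values and zone labels below are invented for illustration:

```python
def knn_locate(fingerprints, labels, rss, k=3):
    # Rank stored fingerprints by squared Euclidean distance to the query
    # RSS vector, then take a majority vote among the k nearest neighbours.
    order = sorted(range(len(fingerprints)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(fingerprints[i], rss)))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```

    A Random Forest, as used for the headline result above, replaces this distance vote with an ensemble of decision trees over the same fingerprint database, which tends to cope better with the signal fluctuations that moving furniture and people introduce.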

    Mobility management-based autonomous energy-aware framework using machine learning approach in dense mobile networks

    A paramount challenge for network densification is to deliver the Fifth Generation (5G) cellular capacity and connectivity demanded while prohibiting increased CO2 emissions and maintaining a greener, healthier and more prosperous environment. Energy consumption is a demanding consideration in the 5G era, with challenges including a reactive mode of operation, high-latency wake-up times, incorrect user association with cells, and multiple cross-functional operations of Self-Organising Networks (SON). To address this challenge, we propose a novel Mobility Management-Based Autonomous Energy-Aware Framework that analyses bus passenger ridership through statistical Machine Learning (ML) and performs proactive energy savings, coupled with CO2 emission reduction, in a Heterogeneous Network (HetNet) architecture using Reinforcement Learning (RL). Furthermore, we compare and report various ML algorithms using bus passenger ridership obtained from a London Overground (LO) dataset. Extensive spatiotemporal simulations show that the proposed framework can achieve up to 98.82% prediction accuracy and CO2 reduction gains of up to 31.83%.
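    The RL component can be illustrated with a deliberately tiny Q-learning sketch: a small-cell controller learns when to sleep based on predicted traffic. The state/action encoding and reward numbers are all assumptions for illustration, not the paper's framework:

```python
import random

# state: 0 = low predicted traffic, 1 = high; action: 0 = sleep, 1 = active.
# Illustrative rewards: sleeping under low load saves energy, staying active
# under high load serves demand; the other two combinations are penalised.
REWARD = {(0, 0): 1.0, (0, 1): -0.5, (1, 0): -1.0, (1, 1): 1.0}

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    # One-step (bandit-style) Q-learning with epsilon-greedy exploration.
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = rng.randrange(2)                          # observed traffic level
        if rng.random() < epsilon:
            a = rng.randrange(2)                      # explore
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])  # exploit
        Q[(s, a)] += alpha * (REWARD[(s, a)] - Q[(s, a)])
    return Q
```

    The learned policy (sleep when predicted load is low, stay active when it is high) is what couples the ML ridership prediction to the proactive, rather than reactive, energy savings described above.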

    Comparative analysis of an urban LoRaWAN deployment: real world vs. simulation

    LoRaWAN simulations are a flexible way to analyse the behaviour of this LPWAN technology in scenarios that are unfeasible to deploy due to their scale and the number of devices required. In parallel, there is a continued lack of larger-scale LoRaWAN deployments in the current literature, and, crucially, none of these studies includes a comparison with any theoretical model, such as discrete-time simulation or mathematical analysis, for validation. In this paper, we deploy a 20-node LoRaWAN network around the University of Glasgow's campus, analyse the results, and then develop an NS-3 simulation to recreate, as faithfully as possible, the behaviour and topology of the physical deployment. The performance of the deployment and the simulation is then compared, and the results show that, while the complexity of the simulation is kept relatively low, it is possible to obtain simulation results within about 20% of the deployment results.
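    The "within about 20%" agreement above is a relative-error criterion on performance metrics such as packet delivery ratio. A trivial sketch of that comparison (the packet counts are invented for illustration; the paper's metrics may differ):

```python
def pdr(received, sent):
    # Packet delivery ratio of a link or network.
    return received / sent

def within_tolerance(measured, simulated, tol=0.20):
    # True when the simulated metric deviates from the measured one by at
    # most `tol` relative error, mirroring the ~20% agreement reported.
    return abs(simulated - measured) / measured <= tol
```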